41 research outputs found

    Recent Results on the Implementation of a Burst Error and Burst Erasure Channel Emulator Using an FPGA Architecture

    The behaviour of a transmission channel may be simulated using the performance abilities of current-generation multiprocessing hardware, namely, a multicore Central Processing Unit (CPU), a general purpose Graphics Processing Unit (GPU), or a Field Programmable Gate Array (FPGA). These options were investigated by Cullinan et al. in a 2012 paper, where the capabilities of the three devices were compared to determine which device is best suited to which specific task. In particular, it was shown that, for the application that is the objective of our work (i.e., transmission channel simulation), the FPGA is 26.67 times faster than the GPU and 10.76 times faster than the CPU. Motivated by these results, in this paper we propose and present a direct hardware emulation. In particular, a Cyclone II FPGA architecture is implemented to simulate the behaviour of a burst error channel, in which errors are clustered together, and of a burst erasure channel, in which erasures are clustered together. The results presented in the paper are valid for any FPGA architecture that may be considered for this purpose.
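A burst channel of the kind emulated here is commonly modelled in software with a two-state Gilbert-Elliott chain (a "good" state with few errors and a "bad" state with many, so errors cluster while the chain dwells in the bad state). The sketch below is a minimal software reference, with illustrative transition and error probabilities that are not taken from the paper; replacing the error flags with erasure marks yields the burst erasure counterpart.

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, e_good=0.0, e_bad=0.5, seed=1):
    """Generate a clustered (bursty) error pattern with a two-state
    Gilbert-Elliott model: 1 marks a bit error, 0 a correct bit.
    All parameter values here are illustrative, not from the paper."""
    rng = random.Random(seed)
    bad = False
    pattern = []
    for _ in range(n_bits):
        # State transition: good -> bad w.p. p_gb, bad -> good w.p. p_bg.
        if bad:
            if rng.random() < p_bg:
                bad = False
        else:
            if rng.random() < p_gb:
                bad = True
        # Bit error probability depends on the current state.
        e = e_bad if bad else e_good
        pattern.append(1 if rng.random() < e else 0)
    return pattern

pattern = gilbert_elliott(10000)
```

With these parameters the chain spends roughly five percent of the time in the bad state, so errors appear in well-separated clusters rather than uniformly.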

    More Accurate Analysis of Sum-Product Decoding of LDPC codes Using a Gaussian Approximation

    This letter presents a more accurate mathematical analysis, with respect to the one performed in Chung et al.'s 2001 paper, of belief-propagation decoding for Low-Density Parity-Check (LDPC) codes on memoryless Binary-Input Additive White Gaussian Noise (BI-AWGN) channels, when considering a Gaussian Approximation (GA) for message densities under density evolution. The recurrent sequence, defined in Chung et al.'s 2001 paper, describing the message passing between variable and check nodes, follows from the GA approach and involves the function φ(x), therein defined, and its inverse. The analysis of this function is resumed here and studied in depth, to obtain tighter upper and lower bounds on it. Moreover, unlike the upper bound given in the above-cited paper, the tighter upper bound on φ(x) is invertible. This allows a more accurate evaluation of the asymptotic performance of sum-product decoding of LDPC codes when a GA is assumed.
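The function φ(x) at the heart of the GA recursion can be evaluated numerically and compared against Chung et al.'s well-known closed-form approximation; the letter's tighter bounds are not reproduced here, so the sketch below only illustrates the object being bounded.

```python
import math

def phi(x, n=20000):
    """phi(x) from Chung et al. (2001), computed by trapezoidal quadrature:
    phi(x) = 1 - (4*pi*x)^(-1/2) * integral of tanh(u/2)*exp(-(u-x)^2/(4x)) du,
    with phi(0) = 1."""
    if x <= 0:
        return 1.0
    s = math.sqrt(2.0 * x)              # std dev of the Gaussian message density
    lo, hi = x - 10.0 * s, x + 10.0 * s  # +/- 10 sigma covers the integrand
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * math.tanh(u / 2.0) * math.exp(-((u - x) ** 2) / (4.0 * x))
    return 1.0 - h * total / math.sqrt(4.0 * math.pi * x)

def phi_approx(x):
    """Chung et al.'s closed-form (and invertible) approximation of phi(x),
    stated for 0 < x < 10."""
    return math.exp(-0.4527 * x ** 0.86 + 0.0218)
```

Plotting the two on (0, 10) shows the approximation tracking the exact integral closely; the letter's contribution is a pair of even tighter, still invertible, bounds.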

    On rate-compatible punctured turbo codes design

    We propose and compare some design criteria for the search of good systematic rate-compatible punctured turbo code (RCPTC) families. The considerations presented by S. Benedetto et al. (1998) to find the "best" component encoders for turbo code construction are extended to find good rate-compatible puncturing patterns for a given interleaver length. This approach is shown to lead to codes that improve over previous ones, both in the maximum-likelihood sense (using transfer function bounds) and in the iterative decoding sense (through simulation results). To obtain simulation and analytical results, the coded bits are transmitted over an additive white Gaussian noise (AWGN) channel using antipodal binary modulation. The two main applications of this technique are its use in hybrid incremental ARQ/FEC schemes and its use to achieve unequal error protection of an information sequence.
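Rate compatibility means that every coded bit transmitted at a higher rate is also transmitted at every lower rate, so an incremental-redundancy ARQ round only has to send the newly revealed bits. The sketch below checks this nesting property on hypothetical puncturing patterns for a rate-1/3 mother code (the patterns are placeholders, not those designed in the paper).

```python
# Nested puncturing patterns over one puncturing period of a hypothetical
# rate-1/3 systematic turbo mother code: each period covers 4 information
# bits, i.e. 12 coded bits; 1 = transmit, 0 = puncture. Illustrative only.
PATTERNS = {
    "2/3": [1, 1, 1, 1,  1, 0, 1, 0,  0, 0, 0, 0],
    "1/2": [1, 1, 1, 1,  1, 1, 1, 0,  1, 0, 0, 0],
    "1/3": [1] * 12,
}

def rate(pattern, info_bits_per_period=4):
    """Effective code rate of a punctured pattern."""
    return info_bits_per_period / sum(pattern)

def is_rate_compatible(high, low):
    """True if every bit kept at the higher rate is also kept at the
    lower rate (the defining property of an RCPTC family)."""
    return all(l >= h for h, l in zip(high, low))

def incremental_bits(high, low):
    """Positions sent at the lower rate but not at the higher one: the
    bits an incremental-redundancy retransmission would carry."""
    return [i for i, (h, l) in enumerate(zip(high, low)) if l and not h]
```

For unequal error protection the same machinery applies, with different segments of the information sequence assigned to different rates of the family.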

    Useful Mathematical Tools for Capacity Approaching Codes Design

    The focus of this letter is the oldest class of codes that can approach the Shannon limit quite closely, i.e., low-density parity-check (LDPC) codes, and two mathematical tools that can make their design easier under appropriate assumptions. In particular, we present a simple algorithmic method to estimate the threshold for regular and irregular LDPC codes on memoryless binary-input continuous-output AWGN channels with sum-product decoding, and, to determine how close the obtained thresholds are to the theoretical maximum, i.e., to the Shannon limit, we give a simple and invertible expression of the AWGN channel capacity in the binary-input soft-output case. For these codes, the thresholds are defined as the maximum noise level such that an arbitrarily small bit-error probability can be achieved as the block length tends to infinity. We assume a Gaussian approximation for message densities under density evolution, a widely used simplification of the decoding algorithm.
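The threshold-estimation idea can be sketched for a regular (3,6) LDPC code by running the GA density-evolution recursion of Chung et al. inside a bisection on the noise standard deviation. This sketch uses the classical closed-form approximation of φ(x) rather than the authors' algorithmic method, so the result is only indicative (the GA threshold for the (3,6) ensemble is known to be near σ ≈ 0.87).

```python
import math

def phi(x):
    """Chung et al.'s invertible approximation of phi(x); stated for
    0 < x < 10 but used over the whole recursion here as a sketch."""
    return math.exp(-0.4527 * x ** 0.86 + 0.0218) if x > 0 else 1.0

def phi_inv(y):
    return ((0.0218 - math.log(y)) / 0.4527) ** (1.0 / 0.86)

def converges(sigma, dv=3, dc=6, iters=500):
    """GA density-evolution recursion for a regular (dv, dc) LDPC code on
    the BI-AWGN channel; True if the check-to-variable message mean
    diverges, i.e. the bit-error probability can be driven to zero."""
    s = 2.0 / sigma ** 2                 # mean of the channel LLR
    t = 0.0                              # mean of check-to-variable messages
    for _ in range(iters):
        q = 1.0 - (1.0 - phi(s + (dv - 1) * t)) ** (dc - 1)
        if q <= 0.0:                     # numerical underflow: phi hit zero
            return True
        t = phi_inv(q)
        if t > 1e3:
            return True
    return False

# Bisection for the threshold: the largest sigma at which the recursion
# still diverges (converges at lo, stalls at a fixed point at hi).
lo, hi = 0.5, 1.5
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if converges(mid) else (lo, mid)
sigma_star = lo
```

Comparing the threshold found this way with the invertible capacity expression given in the letter quantifies the remaining gap to the Shannon limit.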

    Limiting Performance of Millimeter-Wave Communications in the Presence of a 3D Random Waypoint Mobility Model

    This paper proposes a mathematical framework for evaluating the limiting capacity of a millimeter-wave (mmWave) communication link involving a mobile user (MU) and a cellular base station. The investigation considers a three-dimensional (3D) space in which the random waypoint mobility model is used to probabilistically identify the location of the MUs. In addition, the analysis accounts for path-loss attenuation, directional antenna gains, shadowing, and the modulation scheme. Closed-form formulas for the received signal power, the Shannon capacity, and the bit error rate (BER) are obtained for both line-of-sight (LoS) and non-LoS scenarios in a noise-limited operating regime. The theoretical model is first checked by Monte Carlo validations, and then employed to explore the influence of the antenna gain and of the cell radius on the capacity and on the BER of a fifth-generation (5G) link in a 3D environment, taking into account both the 28 and 73 GHz mmWave bands.
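A Monte Carlo validation of this kind can be sketched as follows. Two simplifications are assumed and labelled here: the user position is drawn uniformly in a 3D ball (the actual random-waypoint stationary distribution is not uniform), and only the LoS noise-limited case with free-space-like path loss is simulated; every numeric parameter is illustrative, not from the paper.

```python
import math, random

def sample_point_in_ball(r, rng):
    """Uniform point in a 3D ball of radius r (simplification: the paper
    uses the random-waypoint stationary distribution, which is not uniform)."""
    while True:
        x, y, z = (rng.uniform(-r, r) for _ in range(3))
        if x * x + y * y + z * z <= r * r:
            return x, y, z

def mean_capacity(radius_m=100.0, n_samples=20000, f_ghz=28.0,
                  ptx_dbm=30.0, gain_db=20.0, pl_exp=2.0, bw_hz=1e9,
                  noise_dbm=-174 + 10 * math.log10(1e9) + 10, seed=7):
    """Monte Carlo estimate of the mean Shannon capacity (bit/s) of a
    noise-limited LoS mmWave link; all parameter values are illustrative
    (30 dBm Tx power, 20 dB antenna gain, 1 GHz bandwidth, 10 dB NF)."""
    rng = random.Random(seed)
    fspl_1m = 32.4 + 20 * math.log10(f_ghz)   # free-space loss at 1 m, dB
    total = 0.0
    for _ in range(n_samples):
        x, y, z = sample_point_in_ball(radius_m, rng)
        d = max(1.0, math.sqrt(x * x + y * y + z * z))
        pl_db = fspl_1m + 10 * pl_exp * math.log10(d)
        snr_db = ptx_dbm + gain_db - pl_db - noise_dbm
        total += bw_hz * math.log2(1.0 + 10 ** (snr_db / 10.0))
    return total / n_samples
```

Sweeping `gain_db` or `radius_m` in this loop reproduces, in spirit, the paper's study of how antenna gain and cell radius shape the capacity of the link.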

    Low-Complexity Phase-Only Scanning by Aperiodic Antenna Arrays

    This letter proposes a simple and fast method for phase-only beam-scanning of linear aperiodic arrays. The method adopts a three-step procedure, consisting of the placement of the array elements by proper distribution functions, the synthesis of the excitation amplitudes by an extended Gaussian approach, and the evaluation of the excitation phases by a closed-form phase shift. Numerical examples and comparisons with existing approaches are presented to check the effectiveness of the method, with the final aim of confirming its satisfactory behavior in terms of trade-off between accuracy and computational cost.
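The third step, the closed-form phase shift, is the classical one: each element's phase cancels the path delay toward the scan angle. The sketch below verifies that this steers the main beam of an aperiodic layout; the element positions and the uniform amplitudes are placeholders (the letter derives positions from distribution functions and amplitudes from an extended Gaussian synthesis).

```python
import cmath, math

def array_factor(positions_wl, amps, phases, theta_deg):
    """|AF| of a linear array; element positions in wavelengths."""
    k = 2 * math.pi                      # wavenumber times wavelength
    u = math.sin(math.radians(theta_deg))
    return abs(sum(a * cmath.exp(1j * (k * x * u + p))
                   for x, a, p in zip(positions_wl, amps, phases)))

# Illustrative aperiodic layout (in wavelengths) and uniform amplitudes;
# both are placeholders, not the synthesized values of the letter.
positions = [0.0, 0.62, 1.15, 1.81, 2.40, 3.10, 3.66, 4.35, 4.91, 5.60]
amps = [1.0] * len(positions)

theta0 = 25.0                            # desired scan angle, degrees
k = 2 * math.pi
u0 = math.sin(math.radians(theta0))
# Closed-form shifting: phase of element n cancels its delay toward theta0.
phases = [-k * x * u0 for x in positions]

peak = array_factor(positions, amps, phases, theta0)
```

At `theta0` every term adds in phase, so the pattern peak equals the sum of the amplitudes, confirming the scan without any iterative phase optimization.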

    Low Complexity Rate Compatible Puncturing Patterns Design for LDPC Codes

    In contemporary digital communications design, two major challenges must be addressed: adaptability and flexibility. The system should be capable of flexible and efficient use of all available spectrum and should be adaptable enough to provide efficient support for a diverse set of service characteristics. These needs imply the necessity of limit-achieving and flexible channel coding techniques, to improve system reliability. Low-Density Parity-Check (LDPC) codes fit such requirements well, since they are capacity-achieving. Moreover, through puncturing, which allows the adaptation of the coding rate to different channel conditions with a single encoder/decoder pair, adaptability and flexibility can be obtained at a low computational cost. In this paper, the design of rate-compatible puncturing patterns for LDPC codes is addressed. We use a previously defined formal analysis of a class of punctured LDPC codes through their equivalent parity-check matrices. We address a new design criterion for the puncturing patterns using a simplified analysis of the belief-propagation decoding algorithm, i.e., considering a Gaussian approximation for message densities under density evolution, and a simple algorithmic method, recently defined by the authors, to estimate the threshold for regular and irregular LDPC codes on memoryless binary-input continuous-output Additive White Gaussian Noise (AWGN) channels.
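Two operational facts about punctured LDPC codes underlie this kind of design and can be stated in a few lines: puncturing raises the effective rate without touching the encoder, and at the decoder the punctured (untransmitted) bits simply enter belief propagation as erasures. The sketch below illustrates both; it is a generic description, not the paper's equivalent-parity-check-matrix analysis.

```python
def design_rate(n, k, n_punct):
    """Effective code rate after puncturing n_punct of the n coded bits
    of an (n, k) mother code."""
    return k / (n - n_punct)

def channel_llrs(received, punct_set, n, sigma2):
    """Belief-propagation initialization for a punctured LDPC code on the
    BI-AWGN channel: transmitted bits get the usual LLR 2*y/sigma^2, while
    punctured bits are unknown at the receiver and start at LLR 0."""
    llrs, it = [], iter(received)
    for i in range(n):
        llrs.append(0.0 if i in punct_set else 2.0 * next(it) / sigma2)
    return llrs
```

The design problem of the paper is then which positions to put in `punct_set` at each rate so that the GA-estimated threshold degrades as little as possible while the patterns stay nested.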

    On the Error Statistics of Turbo Decoding for Hybrid Concatenated Codes Design

    In this paper we propose a model for the generation of error patterns at the output of a turbo decoder. One of the advantages of this model is that it can be used to generate the error sequence with little effort. Thus, it provides a basis for designing hybrid concatenated codes (HCCs) employing a turbo code as the inner code. These coding schemes combine the features of parallel and serially concatenated codes and thus offer more freedom in code design. It has been demonstrated, in fact, that HCCs can perform closer to capacity than serially concatenated codes while still maintaining a minimum distance that grows linearly with block length. In particular, small memory-one component encoders are sufficient to yield asymptotically good code ensembles for such schemes. The resulting codes provide low-complexity encoding and decoding and, in many cases, can be decoded using relatively few iterations.
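Fitting a model of this kind starts from measuring the error statistics of simulated decoder output, typically by segmenting the binary error sequence into bursts. The sketch below uses a simple guard-space rule (errors separated by fewer than `guard` correct bits belong to the same burst); the rule and its parameter are illustrative, not the paper's definition.

```python
def burst_statistics(errors, guard=3):
    """Split a binary error sequence (1 = bit error) into bursts and
    return the list of burst lengths. A new burst starts when at least
    `guard` consecutive correct bits separate two errors."""
    bursts, start, last = [], None, None
    for i, e in enumerate(errors):
        if e:
            if start is None or i - last > guard:
                if start is not None:
                    bursts.append(last - start + 1)  # close previous burst
                start = i
            last = i
    if start is not None:
        bursts.append(last - start + 1)              # close final burst
    return bursts
```

Histograms of these burst lengths and of the gaps between bursts are the quantities a generative error-pattern model must reproduce for HCC design to be trustworthy.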

    Turbo Codes Construction for Robust Hybrid Multitransmission Schemes

    In certain applications the user has to cope with random packet erasures due, e.g., to deep fading conditions on wireless links, or to congestion on wired networks. In other applications, the user has to cope with a pure wireless link, in which all packets are available to the receiver, even if seriously corrupted. The ARQ/FEC schemes already studied and presented in the literature are well optimized for only one of these two applications. In a previous work, the authors aimed at bridging this gap, giving a design method for obtaining hybrid ARQ schemes that perform well in both conditions, i.e., in the presence of packet erasures and of packet fading. This scheme uses a channel coding system based on partially-systematic periodically punctured turbo codes. Since the computation of the transfer function and, consequently, of the union bound on the Bit or Frame Error Rate of a partially-systematic punctured turbo code becomes highly intensive as the interleaver size and the puncturing period increase, in this work a simplified and more efficient method to calculate the most significant terms of the average distance spectrum of the turbo encoder is proposed and validated.
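Once the most significant terms of the average distance spectrum are available, the truncated union bound on the BER over BPSK/AWGN follows in one line per term. The sketch below shows that final step; the spectrum values are hypothetical placeholders, not results from the paper.

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(spectrum, rate, ebno_db):
    """Truncated union bound on the BER over BPSK/AWGN from the leading
    terms of the average distance spectrum.
    spectrum: list of (d, Bd) pairs, where Bd is the information-bit
    multiplicity of weight-d codewords per information bit."""
    ebno = 10 ** (ebno_db / 10.0)
    return sum(bd * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, bd in spectrum)

# Hypothetical leading spectrum terms, for illustration only:
spectrum = [(6, 0.5), (8, 1.2), (10, 3.0)]
ber = union_bound_ber(spectrum, rate=1/3, ebno_db=2.0)
```

The paper's contribution sits upstream of this step: computing the `(d, Bd)` pairs efficiently for partially-systematic periodically punctured turbo codes, where the full transfer-function computation becomes intractable.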